37 research outputs found

    Sensor Planning and Control in a Dynamic Environment

    This paper presents an approach to the problem of controlling the configuration of a team of mobile agents equipped with cameras so as to optimize the quality of the estimates derived from their measurements. The issue of optimizing the robots' configuration is particularly important in the context of teams equipped with vision sensors since most estimation schemes of interest will involve some form of triangulation. We provide a theoretical framework for tackling the sensor planning problem and a practical computational strategy, inspired by work on particle filtering, for implementing the approach. We extend our previous work by showing how modeled system dynamics and configuration space obstacles can be handled. These ideas have been demonstrated both in simulation and on actual robotic platforms. The results indicate that the framework is able to solve fairly difficult sensor planning problems online without requiring excessive amounts of computational resources.
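The particle-filtering flavor of this approach can be illustrated with a small sketch. Everything below — the positions, the ring of candidate placements, and the scoring function — is an illustrative assumption, not the paper's actual formulation: the target's position distribution is represented by particles, and candidate two-camera configurations are scored by a common proxy for bearing-only triangulation quality (rays near orthogonal, cameras near the target).

```python
import numpy as np

rng = np.random.default_rng(1)
# Particle approximation of the target's position distribution.
particles = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(200, 2))

def triangulation_score(cam_a, cam_b, particles):
    """Higher when the two viewing rays are closer to orthogonal and the
    cameras are closer to the target -- a simple proxy for the quality
    of a bearing-only triangulation, averaged over the particle set."""
    ra = particles - cam_a
    rb = particles - cam_b
    da = np.linalg.norm(ra, axis=1)
    db = np.linalg.norm(rb, axis=1)
    cos = np.einsum("ij,ij->i", ra, rb) / (da * db)
    return np.mean((1.0 - cos**2) / (da**2 * db**2))

# Evaluate candidate camera placements on a ring around the particle
# cloud and keep the best-scoring pair -- the sampling-based spirit of
# assessing configurations against a particle set.
angles = np.linspace(0, 2 * np.pi, 24, endpoint=False)
ring = 5.0 * np.stack([np.cos(angles), np.sin(angles)], axis=1) + [5.0, 5.0]
best = max(((a, b) for a in ring for b in ring if not np.allclose(a, b)),
           key=lambda ab: triangulation_score(ab[0], ab[1], particles))
```

With the particles roughly centered in the ring, the winning pair ends up close to 90 degrees apart as seen from the target, which matches the intuition that orthogonal bearings triangulate best.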

    A Framework and Architecture for Multi-Robot Coordination

    In this paper, we present a framework and the software architecture for the deployment of multiple autonomous robots in an unstructured and unknown environment, with applications ranging from scouting and reconnaissance to search and rescue and manipulation tasks. Our software framework provides the methodology and the tools that enable robots to exhibit deliberative and reactive behaviors in autonomous operation, to be reprogrammed by a human operator at run-time, and to learn and adapt to unstructured, dynamic environments and new tasks, while providing performance guarantees. We demonstrate the algorithms and software on an experimental testbed that involves a team of car-like robots using a single omnidirectional camera as a sensor, without explicit use of odometry.

    Sensor Fusion Techniques for Cooperative Localization in Robot Teams

    A fundamental capability for autonomous robot operations is localization—that is, the ability of a robot to estimate its position in the environment. It is a base level capability enabling numerous other technologies including mapping, manipulation, and target tracking. Within this realm of research, there is a more recent and narrower focus on Cooperative Localization (CL) for robot teams. In this paradigm, groups of robots combine sensor measurements to improve localization performance. The focus of this dissertation is applying sensor fusion techniques to the CL problem. We are particularly interested in employing teams of robots in target tracking roles. This has motivated our own solution to CL which is capable of solving the Simultaneous Localization and Target Tracking problem. Under this paradigm, targets are merely viewed as “passive” team members that must be localized. The benefit of such an approach is that although the targets do not contribute measurements, they still contribute constraints to the localization process. We assume an unknown-but-bounded model for sensor noise, whereby bearing and range measurements are modeled as linear constraints on the configuration space of the robot team. Merging these constraints induces a convex polytope representing the set of all configurations consistent with sensor measurements. Estimates for the uncertainty in the absolute position of a single robot or the relative positions of two or more nodes can then be obtained by projecting this polytope onto appropriate subspaces of the configuration space. While recovering the exact projection can require exponential time, we propose a novel method for approximating these projections using linear programming techniques. We then further extend current localization methods to the problem of active target tracking. Recall that in the CL model, pose estimates are formed by combining information from multiple distributed sensors. This capability invites the following question: given that the robot platforms are mobile, how should they be deployed in order to maximize the quality of the estimates returned by the team? We present a generic theoretical framework, and practical computational approaches, for tackling this problem.
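The projection step described above can be sketched with an off-the-shelf LP solver. The geometry here — two robot positions, the bearings, and the error bounds — is invented purely for illustration: each noisy bearing contributes two half-plane (linear) constraints forming a cone, and minimizing/maximizing one coordinate over the intersection polytope bounds the target's position along that axis.

```python
import numpy as np
from scipy.optimize import linprog

def bearing_cone(origin, bearing, err):
    """Half-planes A @ p <= b for the cone of rays from `origin` whose
    direction lies within `err` radians of `bearing`."""
    rows, rhs = [], []
    for sgn, edge in ((1.0, bearing + err), (-1.0, bearing - err)):
        n = sgn * np.array([-np.sin(edge), np.cos(edge)])  # inward normal
        rows.append(n)
        rhs.append(n @ origin)
    return np.array(rows), np.array(rhs)

# Two robots at known positions each measure the target's bearing with
# a +/- 5 degree bound; merging the cones yields a convex polytope.
A1, b1 = bearing_cone(np.array([0.0, 0.0]), np.deg2rad(45), np.deg2rad(5))
A2, b2 = bearing_cone(np.array([4.0, 0.0]), np.deg2rad(135), np.deg2rad(5))
A = np.vstack([A1, A2])
b = np.concatenate([b1, b2])

def interval(A, b, axis=0):
    """Project the polytope {p : A p <= b} onto one coordinate axis by
    solving two small linear programs (min and max of that coordinate)."""
    c = np.zeros(2)
    c[axis] = 1.0
    lo = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * 2).fun
    hi = -linprog(-c, A_ub=A, b_ub=b, bounds=[(None, None)] * 2).fun
    return lo, hi

x_lo, x_hi = interval(A, b, axis=0)
```

The exact bearings intersect at (2, 2), so the recovered x-interval is a tight bracket around 2; widening the bearing error widens the interval accordingly. This is the sense in which projecting the constraint polytope with LPs yields position uncertainty bounds.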

    Sensor Planning and Control in a Dynamic Environment

    In this paper, we present an approach to the problem of actively controlling the configuration of a team of mobile agents equipped with cameras so as to optimize the quality of the estimates derived from their measurements. The issue of optimizing the robots' configuration is particularly important in the context of teams equipped with vision sensors, since most estimation schemes of interest will involve some form of triangulation. We provide a theoretical framework for tackling the sensor planning problem, and a practical computational strategy inspired by work on particle filtering for implementing the approach. We then extend our framework by showing how modeled system dynamics and configuration space obstacles can be handled. These ideas have been applied to a target tracking task, and demonstrated both in simulation and with actual robot platforms. The results indicate that the framework is able to solve fairly difficult sensor planning problems online without requiring excessive amounts of computational resources. Keywords: optimal target tracking, sensor fusion, particle filtering.

    On-line Calibration of Multiple LIDARs on a Mobile Vehicle Platform

    In this paper, we examine the problem of extrinsic calibration of multiple LIDARs on a mobile vehicle platform. To achieve fully automated and on-line calibration, the original non-linear calibration model is reformulated as a second-order cone program (SOCP). This provides an advantage over more standard linearized approaches in that a priori information such as a default LIDAR calibration, calibration tolerances, etc., can be readily modeled. Furthermore, in contrast to general non-linear methods, the SOCP relaxation is convex, returns a global minimum, and can be solved very quickly using modern interior point methods (IPM). This enables the calibration to be estimated on-line for multiple LIDARs simultaneously. Experimental results are provided where the approach is used to successfully calibrate a pair of Sick LMS291-S14 LIDARs mounted on a mobile vehicle platform.
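The flavor of the convex formulation can be sketched with a toy 2-D version. This is not the paper's actual model: the data are synthetic, the rotation is linearized by hand, and scipy's generic SLSQP solver stands in for a dedicated interior-point SOCP solver, purely so the example is self-contained. The point it illustrates is that once the rotation is linearized, minimizing the worst-case alignment residual under a prior tolerance on the angle is an SOCP in epigraph form.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
J = np.array([[0.0, -1.0], [1.0, 0.0]])  # 90-degree rotation generator

# Synthetic data: landmarks P in LIDAR A's frame, and the same
# landmarks Q in LIDAR B's frame, offset by a small rigid transform.
theta_true, t_true = 0.05, np.array([0.30, -0.10])
P = rng.uniform(-5.0, 5.0, size=(20, 2))
R = np.array([[np.cos(theta_true), -np.sin(theta_true)],
              [np.sin(theta_true),  np.cos(theta_true)]])
Q = P @ R.T + t_true

def calibrate(P, Q, tol=0.2):
    """Recover (theta, tx, ty) by minimizing the worst-case alignment
    residual. With the rotation linearized (theta small), each bound
    ||r_i|| <= s is a second-order cone constraint, so the problem is
    an SOCP in epigraph form: minimize s over z = [theta, tx, ty, s]."""
    def residuals(x):                       # x = [theta, tx, ty]
        return P + x[0] * (P @ J.T) + x[1:3] - Q
    cons = [{"type": "ineq",                # s >= ||r_i|| for every i
             "fun": lambda z, i=i: z[3] - np.linalg.norm(residuals(z[:3])[i])}
            for i in range(len(P))]
    cons.append({"type": "ineq",            # prior tolerance on the angle
                 "fun": lambda z: tol - abs(z[0])})
    z0 = np.array([0.0, 0.0, 0.0, 1.0])     # feasible start: s large
    res = minimize(lambda z: z[3], z0, constraints=cons, method="SLSQP")
    return res.x[:3]

theta_est, tx_est, ty_est = calibrate(P, Q)
```

Because the problem is convex after linearization, the solver's answer is a global minimum, which is what makes the on-line, simultaneous multi-LIDAR setting tractable; the prior-tolerance constraint shows how a priori information enters the model directly as an extra cone/interval constraint.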